Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge. While promising, this approach requires the use of models with billions of parameters, which are expensive to train and query. In this paper, we investigate how much these models can benefit from retrieving text passages that potentially contain evidence. We obtain state-of-the-art results on the Natural Questions and TriviaQA open benchmarks. Interestingly, we observe that the performance of this method improves significantly as the number of retrieved passages increases. This is evidence that generative models are good at aggregating and combining evidence from multiple passages.
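To make the approach concrete, below is a minimal sketch of the fusion idea the abstract describes: each retrieved passage is encoded independently together with the question, and the decoder then attends over the concatenation of all passage representations, aggregating evidence across passages at generation time. This is a simplified illustration, not the paper's actual implementation (which modifies T5's forward pass); it assumes the Hugging Face `transformers` library and a `t5-small` checkpoint, and the question and passages are hypothetical examples.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tok = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical question and retrieved passages for illustration.
question = "Where is the Eiffel Tower located?"
passages = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
    "Paris is the capital and most populous city of France.",
]

# Encode each (question, passage) pair independently.
inputs = [f"question: {question} context: {p}" for p in passages]
enc = tok(inputs, return_tensors="pt", padding=True)
encoder_out = model.encoder(
    input_ids=enc.input_ids, attention_mask=enc.attention_mask
)

# Fuse: concatenate the per-passage encoder states along the sequence
# axis so the decoder can attend over all passages jointly.
n, seq_len, dim = encoder_out.last_hidden_state.shape
fused_states = encoder_out.last_hidden_state.reshape(1, n * seq_len, dim)
fused_mask = enc.attention_mask.reshape(1, n * seq_len)

# Decode conditioned on the fused representation.
out = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=fused_states),
    attention_mask=fused_mask,
    max_length=20,
)
print(tok.decode(out[0], skip_special_tokens=True))
```

A consequence of this design is that the encoder's cost grows only linearly with the number of passages (each is processed independently), while cross-attention in the decoder is where evidence from the different passages gets combined.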
---